Deep learning has been widely used in the perception (e.g., 3D object detection) of intelligent vehicle driving. Thanks to Vehicle-to-Vehicle (V2V) communication, deep learning based features from other agents can be shared with the ego vehicle to improve its perception. This is known as Cooperative Perception in V2V research, and its algorithms have advanced dramatically in recent years. However, all existing cooperative perception algorithms assume ideal V2V communication and do not consider the possibility of corrupted shared features caused by Lossy Communication (LC), which is common in complex real-world driving scenarios. In this paper, we first study the side effects (e.g., detection performance drop) caused by lossy communication in V2V Cooperative Perception, and then propose a novel intermediate LC-aware feature fusion method that relieves these side effects with an LC-aware Repair Network (LCRN) and enhances the interaction between the ego vehicle and other vehicles with a specially designed V2V Attention Module (V2VAM), which includes intra-vehicle attention for the ego vehicle and uncertainty-aware inter-vehicle attention. Extensive experiments on the public cooperative perception dataset OPV2V (based on the digital-twin CARLA simulator) demonstrate that the proposed method is highly effective for cooperative point cloud based 3D object detection under lossy V2V communication.
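The abstract gives no implementation details; as a loose illustration of the uncertainty-aware inter-vehicle attention idea, the following PyTorch sketch down-weights neighbor features at locations predicted to be unreliable. All module names, the reliability head, and the fusion rule are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class UncertaintyAwareFusion(nn.Module):
    """Hypothetical sketch: fuse ego features with neighbor features,
    down-weighting spatial locations whose shared features are likely
    corrupted by lossy communication."""
    def __init__(self, channels):
        super().__init__()
        # 1x1 conv predicting a per-location "reliability" logit
        self.uncertainty_head = nn.Conv2d(channels, 1, kernel_size=1)
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)

    def forward(self, ego_feat, neighbor_feats):
        # ego_feat: (C, H, W); neighbor_feats: list of (C, H, W) tensors
        fused = ego_feat
        for nb in neighbor_feats:
            # attention score between ego queries and neighbor keys
            attn = torch.sigmoid((self.query(ego_feat.unsqueeze(0)) *
                                  self.key(nb.unsqueeze(0))).mean(1, keepdim=True))
            # reliability in [0, 1]: low where the feature looks damaged
            rel = torch.sigmoid(self.uncertainty_head(nb.unsqueeze(0)))
            fused = fused + (attn * rel).squeeze(0) * nb
        return fused

ego = torch.randn(64, 100, 100)
neighbors = [torch.randn(64, 100, 100) for _ in range(2)]
out = UncertaintyAwareFusion(64)(ego, neighbors)
print(out.shape)  # torch.Size([64, 100, 100])
```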
Although deep learning based object detection methods have achieved promising results on conventional benchmark datasets, it remains challenging to localize objects in low-quality images captured under adverse weather conditions. Existing methods either have difficulty balancing the tasks of image enhancement and object detection, or often ignore latent information that is beneficial for detection. To alleviate this problem, we propose a novel Image-Adaptive YOLO (IA-YOLO) framework, in which each image can be adaptively enhanced for better detection performance. Specifically, a differentiable image processing (DIP) module is proposed to account for the adverse weather conditions for the YOLO detector, whose parameters are predicted by a small convolutional neural network (CNN-PP). We jointly learn CNN-PP and YOLOv3 in an end-to-end fashion, which ensures that CNN-PP can learn an appropriate DIP to enhance the image for detection in a weakly supervised manner. Our proposed IA-YOLO approach can adaptively process images under both normal and adverse weather conditions. The experimental results are very encouraging, demonstrating the effectiveness of the proposed IA-YOLO method in both foggy and low-light scenarios.
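To make the DIP/CNN-PP idea concrete, here is a minimal, hypothetical sketch of one differentiable filter: a tiny CNN predicts a per-image gamma parameter that is applied differentiably before detection. The real DIP module contains several filters; the names, architecture, and parameter ranges below are assumptions.

```python
import torch
import torch.nn as nn

class TinyCNNPP(nn.Module):
    """Hypothetical parameter predictor: a small CNN that maps an input
    image to one filter parameter (here, a gamma value)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, img):
        # predict gamma in an assumed sensible range, e.g. (0.5, 2.5)
        return 0.5 + 2.0 * torch.sigmoid(self.net(img))

def gamma_filter(img, gamma):
    # differentiable gamma correction: img in [0, 1], gamma of shape (B, 1)
    return img.clamp(min=1e-6) ** gamma.view(-1, 1, 1, 1)

imgs = torch.rand(2, 3, 256, 256)      # a batch of (possibly foggy) images
gamma = TinyCNNPP()(imgs)              # parameters predicted per image
enhanced = gamma_filter(imgs, gamma)   # fed to the detector downstream
```

Because the filter is differentiable, detection loss gradients can flow back through it into CNN-PP, which is what enables the weakly supervised end-to-end training described above.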
Accurate airway extraction from computed tomography (CT) images is a critical step in planning navigation bronchoscopy and in the quantitative assessment of airway-related chronic obstructive pulmonary disease (COPD). Existing methods struggle to segment the airway sufficiently, especially the high-generation airways, under the constraint of limited labels, and cannot meet clinical needs in COPD. We propose a novel two-stage 3D contextual transformer based U-Net for airway segmentation from CT images. The method consists of two stages that perform initial and refined airway segmentation, sharing the same subnetwork but taking different airway masks as input. Contextual transformer blocks are applied in both the encoder and decoder paths of the subnetwork to effectively produce high-quality airway segmentation. In the first stage, the whole airway mask and the CT images are provided to the subnetwork; in the second stage, the intrapulmonary airway mask and the corresponding CT scans are provided instead. The predictions of the two stages are then merged as the final prediction. Extensive experiments were performed on an in-house dataset and multiple public datasets. Quantitative and qualitative analyses demonstrate that our method extracts substantially more branches and greater tree length while achieving state-of-the-art airway segmentation performance. The code is available at https://github.com/zhaozsq/airway_segmentation.
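The abstract does not specify how the two stages are merged; one simple, hypothetical reading is a voxel-wise union of the two binary predictions, as sketched below (this is an assumption, not confirmed by the paper).

```python
import numpy as np

def merge_two_stage_predictions(stage1_pred, stage2_pred):
    """Hypothetical merge step: combine the binary airway masks predicted
    by the two stages into a final segmentation via voxel-wise union."""
    return np.logical_or(stage1_pred > 0, stage2_pred > 0).astype(np.uint8)

stage1 = np.random.randint(0, 2, size=(64, 128, 128))  # whole-airway pass
stage2 = np.random.randint(0, 2, size=(64, 128, 128))  # intrapulmonary pass
final = merge_two_stage_predictions(stage1, stage2)
```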
Recently, Vehicle-to-Everything (V2X) cooperative perception has attracted increasing attention. Infrastructure sensors play a critical role in this research field; however, how to find the optimal placement of infrastructure sensors is rarely studied. In this paper, we investigate the problem of infrastructure sensor placement and propose a pipeline that can efficiently and effectively find optimal installation positions for infrastructure sensors in a realistic simulated environment. To better simulate and evaluate LiDAR placement, we establish a Realistic LiDAR Simulation library that can simulate the unique characteristics of different popular LiDARs and produce high-fidelity LiDAR point clouds in the CARLA simulator. By simulating point cloud data for different LiDAR placements, we can evaluate the perception accuracy of these placements using multiple detection models. We then analyze the correlation between point cloud distribution and perception accuracy by calculating the density and uniformity of regions of interest. Experiments show that the placement of infrastructure LiDAR can heavily affect perception accuracy, and validate that density and uniformity can serve as indicators of perception performance in the region of interest.
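As a rough illustration of how the density and uniformity of a region of interest might be computed from a point cloud (the exact definitions used in the paper are not given in the abstract, so the metrics below are assumptions):

```python
import numpy as np

def roi_density_uniformity(points, roi_min, roi_max, grid=(10, 10)):
    """Hypothetical metrics: density = points per square metre inside the
    region of interest; uniformity = inverse variation of per-cell counts
    on an x-y grid (1.0 means perfectly even coverage)."""
    m = np.all((points[:, :2] >= roi_min) & (points[:, :2] <= roi_max), axis=1)
    pts = points[m]
    area = np.prod(np.asarray(roi_max) - np.asarray(roi_min))
    density = len(pts) / area
    counts, _, _ = np.histogram2d(
        pts[:, 0], pts[:, 1], bins=grid,
        range=[(roi_min[0], roi_max[0]), (roi_min[1], roi_max[1])])
    cv = counts.std() / (counts.mean() + 1e-9)  # coefficient of variation
    return density, 1.0 / (1.0 + cv)

cloud = np.random.uniform(-50, 50, size=(100_000, 3))  # x, y, z in metres
print(roi_density_uniformity(cloud, roi_min=(-20, -20), roi_max=(20, 20)))
```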
Recent advances in vehicle-to-infrastructure communication technology have enabled autonomous vehicles to share sensory information to achieve better perception performance. With the rapid growth of autonomous vehicles and intelligent infrastructure, V2X perception systems will soon be deployed at scale, which raises a critical question: how can we evaluate and improve their performance under challenging traffic scenarios before real-world deployment? Collecting diverse, large-scale real-world test scenes seems to be the most straightforward solution, but it is expensive and time-consuming, and the collection can only cover limited scenarios. To this end, we propose V2XP-ASG, the first open adversarial scene generator that can produce realistic, challenging scenes for modern LiDAR-based multi-agent perception systems. V2XP-ASG learns to construct an adversarial collaboration graph and to simultaneously perturb multiple agents' poses in an adversarial yet plausible manner. Experiments show that V2XP-ASG can effectively identify challenging scenes for a wide range of V2X perception systems. Meanwhile, by training on a limited number of these challenging scenes, the accuracy of V2X perception systems can be further improved by 12.3% on challenging scenes and 4% on normal scenes.
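The abstract does not describe the search procedure; a minimal, hypothetical stand-in is a black-box random search over bounded pose perturbations that keeps the configuration maximizing the perception loss. V2XP-ASG itself is learned and far more sophisticated; all names and bounds below are assumptions.

```python
import numpy as np

def adversarial_pose_search(poses, eval_loss, n_trials=50,
                            max_shift=2.0, max_yaw=np.deg2rad(10)):
    """Hypothetical black-box search: randomly perturb agent poses within
    plausible bounds and keep the perturbation that most degrades the
    perception system (i.e., maximises its loss)."""
    best_poses, best_loss = poses, eval_loss(poses)
    for _ in range(n_trials):
        noise = np.zeros_like(poses)
        noise[:, :2] = np.random.uniform(-max_shift, max_shift,
                                         poses[:, :2].shape)
        noise[:, 2] = np.random.uniform(-max_yaw, max_yaw, len(poses))
        cand = poses + noise
        loss = eval_loss(cand)
        if loss > best_loss:            # harder scene found
            best_poses, best_loss = cand, loss
    return best_poses, best_loss

# toy stand-in for running the detector and measuring its loss
poses = np.array([[0.0, 0.0, 0.0], [15.0, 4.0, 0.3]])  # x, y, yaw per agent
worst, loss = adversarial_pose_search(poses, lambda p: float(np.abs(p).sum()))
```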
The electronic design automation (EDA) community has been actively exploring machine learning for very-large-scale integration computer-aided design (VLSI CAD). Many studies have explored learning-based techniques for cross-stage prediction tasks in the design flow to achieve faster design convergence. Although building machine learning (ML) models usually requires a large amount of data, most studies can only generate small internal datasets for validation because of the lack of large public datasets. In this paper, we present the first open-source dataset for such machine learning tasks, called CircuitNet. The dataset consists of more than 10K samples extracted from versatile runs of commercial design tools based on six open-source RISC-V designs.
Bird's-eye-view (BEV) semantic segmentation plays a crucial role in spatial sensing for autonomous driving. Although recent literature has made significant progress on BEV map understanding, it is all based on camera-based systems, which struggle to handle occlusions and to detect distant objects in complex traffic scenes. Vehicle-to-vehicle (V2V) communication technologies enable autonomous vehicles to share sensing information, significantly improving perception performance and range compared with single-agent systems. In this paper, we propose CoBEVT, the first generic multi-agent, multi-camera perception framework that can cooperatively generate BEV map predictions. To efficiently fuse camera features from multi-view and multi-agent data in the underlying transformer architecture, we design a fused axial attention (FAX) module, which captures both local and global spatial interactions across views and agents. Extensive experiments on the V2V perception dataset OPV2V demonstrate that CoBEVT achieves state-of-the-art performance for cooperative BEV semantic segmentation. Moreover, CoBEVT is shown to generalize to other tasks, including 1) BEV segmentation with a single-agent multi-camera setup and 2) 3D object detection with multi-agent LiDAR systems, achieving state-of-the-art performance with real-time inference speed.
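As a rough sketch of the axial-attention idea underlying FAX, attention can be applied along one spatial axis at a time; the toy PyTorch module below is a heavily simplified assumption, not the paper's actual FAX block.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Hypothetical sketch of axial attention: full self-attention along
    one spatial axis at a time, so each token attends to H + W positions
    rather than H * W (FAX is more elaborate than this)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):               # x: (B, H, W, C)
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)   # attend along the width axis
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)  # along height
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 2, 1, 3)

feat = torch.randn(1, 32, 32, 64)       # BEV feature map (B, H, W, C)
out = AxialAttention(64)(feat)           # same shape as the input
```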
Existing multi-agent perception systems assume that every agent uses an identical model with the same parameters and architecture. Performance can degrade when agents use different perception models because of mismatched confidence scores. In this work, we propose a model-agnostic multi-agent perception framework to reduce the negative effects caused by model discrepancies without sharing model information. Specifically, we propose a confidence calibrator that can eliminate the bias in predicted confidence scores. Each agent performs such calibration independently on a standard public database to protect intellectual property. We also propose a corresponding bounding box aggregation algorithm that takes into account the confidence scores and spatial agreement of neighboring boxes. Our experiments shed light on the necessity of model calibration across different agents, and the results show that the proposed framework improves the baseline 3D object detection performance of heterogeneous agents.
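To illustrate what confidence-and-agreement-based box aggregation could look like, here is a hypothetical sketch that clusters boxes by IoU and merges each cluster with confidence-weighted averaging. The paper's actual algorithm is not specified in the abstract, so this is only a plausible instance of the idea.

```python
import numpy as np

def iou(a, b):
    """Axis-aligned IoU of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda c: (c[2] - c[0]) * (c[3] - c[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def aggregate_boxes(boxes, scores, iou_thr=0.5):
    """Hypothetical aggregation: cluster boxes from different agents by
    spatial agreement (IoU) and merge each cluster with a calibrated
    confidence-weighted average, instead of plain NMS."""
    order = np.argsort(scores)[::-1]
    merged, used = [], np.zeros(len(boxes), dtype=bool)
    for i in order:
        if used[i]:
            continue
        cluster = [j for j in order if not used[j]
                   and iou(boxes[i], boxes[j]) >= iou_thr]
        used[cluster] = True
        w = scores[cluster] / scores[cluster].sum()
        merged.append((w[:, None] * boxes[cluster]).sum(axis=0))
    return np.array(merged)

boxes = np.array([[0, 0, 2, 2], [0.1, 0, 2.1, 2], [5, 5, 7, 7]], float)
scores = np.array([0.9, 0.6, 0.8])        # calibrated confidences
print(aggregate_boxes(boxes, scores))      # two merged detections
```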
In this paper, we investigate the application of vehicle-to-everything (V2X) communication to improve the perception performance of autonomous vehicles. We present a robust cooperative perception framework with V2X communication using a novel vision transformer. Specifically, we build a holistic attention model, namely V2X-ViT, to effectively fuse information across on-road agents (i.e., vehicles and infrastructure). V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention, which capture inter-agent interactions and comprehensive spatial relationships. These key modules are designed in a unified transformer architecture to handle common V2X challenges, including asynchronous information sharing, pose errors, and the heterogeneity of V2X components. To validate our approach, we create a large-scale V2X perception dataset using CARLA and OpenCDA. Extensive experimental results demonstrate that V2X-ViT establishes new state-of-the-art performance for 3D object detection and remains robust even under harsh, noisy environments. The code is available at https://github.com/derrickxunu/v2x-vit.
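One way to picture heterogeneous multi-agent self-attention is to give each agent type its own projections; the sketch below is only a guess at the flavor of the mechanism, with all names, shapes, and the two-type split assumed rather than taken from V2X-ViT.

```python
import torch
import torch.nn as nn

class HeteroAgentAttention(nn.Module):
    """Hypothetical sketch of type-aware attention: vehicles and
    infrastructure get distinct key/value projections so the model can
    treat the two sensor sources differently when fusing features."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        # one key/value projection per agent type (0=vehicle, 1=infra)
        self.k = nn.ModuleList([nn.Linear(dim, dim) for _ in range(2)])
        self.v = nn.ModuleList([nn.Linear(dim, dim) for _ in range(2)])

    def forward(self, feats, types):
        # feats: (N_agents, L, C) tokens per agent; types: list of 0/1
        q = self.q(feats[0])                      # ego tokens as queries
        ks = torch.stack([self.k[t](f) for f, t in zip(feats, types)])
        vs = torch.stack([self.v[t](f) for f, t in zip(feats, types)])
        attn = torch.softmax(q @ ks.flatten(0, 1).T / q.shape[-1] ** 0.5, -1)
        return attn @ vs.flatten(0, 1)            # fused ego features

feats = torch.randn(3, 100, 64)                   # ego + 2 collaborators
out = HeteroAgentAttention(64)(feats, types=[0, 0, 1])
print(out.shape)                                   # torch.Size([100, 64])
```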
Employing vehicle-to-vehicle (V2V) communication to improve perception performance in self-driving technology has attracted considerable attention recently; however, the lack of a suitable open dataset for benchmarking algorithms has made it difficult to develop and assess cooperative perception technologies. To this end, we present the first large-scale open simulated dataset for vehicle-to-vehicle perception. It contains over 70 interesting scenes, 11,464 frames, and 232,913 annotated 3D vehicle bounding boxes, collected from 8 towns in CARLA and a digital town of Los Angeles. We then construct a comprehensive benchmark with a total of 16 implemented models to evaluate several information fusion strategies (i.e., early, late, and intermediate fusion) with state-of-the-art LiDAR detection algorithms. Furthermore, we propose a new attentive intermediate fusion pipeline to aggregate information from multiple connected vehicles. Our experiments show that the proposed pipeline can be easily integrated with existing 3D LiDAR detectors and achieves outstanding performance even with large compression rates. To encourage more researchers to investigate vehicle-to-vehicle perception, we will release the dataset, benchmark methods, and all related code at https://mobility-lab.seas.ucla.edu/opv2v/.
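As a rough illustration of attentive intermediate fusion, the following hypothetical sketch scores each connected vehicle's feature map per spatial location and blends them with softmax weights. The pipeline in the paper is not detailed in the abstract; the names and shapes here are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Hypothetical sketch of attentive intermediate fusion: at every
    spatial location, score each connected vehicle's feature vector and
    blend them with softmax weights instead of naive max/avg pooling."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):
        # feats: (N_vehicles, C, H, W) features warped to the ego frame
        logits = torch.cat([self.score(f.unsqueeze(0)) for f in feats])
        weights = torch.softmax(logits, dim=0)     # (N, 1, H, W)
        return (weights * feats).sum(dim=0)        # fused (C, H, W) map

feats = torch.randn(3, 64, 100, 352)               # 3 connected vehicles
fused = AttentiveFusion(64)(feats)
print(fused.shape)                                  # torch.Size([64, 100, 352])
```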